ExtraPush for Convex Smooth Decentralized Optimization over Directed Networks

Authors

  • Jinshan Zeng
  • Wotao Yin
Abstract

In this note, we extend the existing algorithms Extra [13] and subgradient-push [10] to a new algorithm, ExtraPush, for convex consensus optimization over a directed network. When the network is stationary, we propose a simplified algorithm called Normalized ExtraPush. These algorithms use a fixed step size, as in Extra, and accept column-stochastic mixing matrices, as in subgradient-push. We present a preliminary analysis of ExtraPush under a bounded-sequence assumption. For Normalized ExtraPush, we show that it naturally produces a bounded, linearly convergent sequence provided that the objective function is strongly convex.
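To make the update structure concrete, here is a minimal Python sketch of an ExtraPush-style iteration on a toy decentralized least-squares problem. The directed graph, mixing matrix, step size, and data below are our own illustrative assumptions, not the paper's exact setup; consult the paper for the precise initialization and convergence conditions.

import numpy as np

# Hedged sketch of an ExtraPush-style iteration: EXTRA-type correction on a
# numerator state z, push-sum weights w, estimates x = z / w.
# Toy problem: decentralized least squares, f_i(x) = 0.5*||M_i x - b_i||^2.

rng = np.random.default_rng(0)
n, p = 4, 3                           # agents, variable dimension

# Column-stochastic mixing matrix for a directed ring plus one extra edge
# (the extra edge 0 -> 2 keeps A from being doubly stochastic).
A = np.zeros((n, n))
for i in range(n):
    out = [i, (i + 1) % n]            # self-loop + ring out-neighbor
    if i == 0:
        out.append(2)
    for j in out:
        A[j, i] = 1.0 / len(out)      # split column i evenly over out-neighbors
I_plus_A = np.eye(n) + A
A_bar = (np.eye(n) + A) / 2           # EXTRA-style averaged matrix

M = rng.standard_normal((n, 5, p))    # local data: agent i holds (M_i, b_i)
b = rng.standard_normal((n, 5))

def grad(X):
    # Stacked local gradients; row i is grad f_i at row i of X.
    return np.stack([M[i].T @ (M[i] @ X[i] - b[i]) for i in range(n)])

alpha = 0.02                          # fixed step size (illustrative; may need tuning)
z = np.zeros((n, p))                  # push-sum numerator states
w = np.ones(n)                        # push-sum weights
x = z.copy()

# First iteration: a plain push-sum gradient step.
z_prev, x_prev = z.copy(), x.copy()
w = A @ w
z = A @ z - alpha * grad(x)
x = z / w[:, None]

# Main loop: EXTRA-style two-step correction on z, push-sum normalization for x.
for _ in range(2000):
    w = A @ w
    z_new = I_plus_A @ z - A_bar @ z_prev - alpha * (grad(x) - grad(x_prev))
    z_prev, z = z, z_new
    x_prev, x = x, z / w[:, None]

x_star = np.linalg.lstsq(M.reshape(-1, p), b.reshape(-1), rcond=None)[0]
print(np.max(np.abs(x - x_star)))     # every row of x should approach x_star

Roughly speaking, Normalized ExtraPush would replace the running push-sum weights w with precomputed stationary weights of the fixed matrix A; per the abstract, that variant yields a bounded, linearly convergent sequence when the objective is strongly convex.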


Similar Papers

A Proximal Gradient Algorithm for Decentralized Composite Optimization over Directed Networks

This paper proposes a fast decentralized algorithm for solving a consensus optimization problem defined in a directed networked multi-agent system, where the local objective functions have the smooth+nonsmooth composite form, and are possibly nonconvex. Examples of such problems include decentralized compressed sensing and constrained quadratic programming problems, as well as many decentralize...
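The "smooth+nonsmooth composite form" mentioned above is the setting for proximal-gradient methods. The Python sketch below shows only the generic building block (a gradient step on the smooth part followed by the prox of an ℓ1 term, as in the compressed-sensing example), not the paper's full decentralized algorithm; all names and parameters are illustrative.

import numpy as np

# Composite objective f(x) = s(x) + r(x): smooth s, prox-friendly nonsmooth r.
# Here r(x) = lam * ||x||_1, whose prox is soft-thresholding.

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_step(x, grad_s, alpha, lam):
    # Gradient step on the smooth part, then prox of the nonsmooth part.
    return soft_threshold(x - alpha * grad_s(x), alpha * lam)

# Toy usage: LASSO with s(x) = 0.5*||Mx - b||^2.
rng = np.random.default_rng(2)
M = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
x = np.zeros(10)
for _ in range(500):
    x = prox_grad_step(x, lambda v: M.T @ (M @ v - b), alpha=1e-2, lam=0.1)
print(x)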


A Push-Pull Gradient Method for Distributed Optimization in Networks

In this paper, we focus on solving a distributed convex optimization problem in a network, where each agent has its own convex cost function and the goal is to minimize the sum of the agents’ cost functions while obeying the network connectivity structure. In order to minimize the sum of the cost functions, we consider a new distributed gradient-based method where each node maintains two estima...
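The preview is cut off, but push-pull methods typically maintain two estimates per node: one of the decision variable, mixed with a row-stochastic matrix (pull), and one tracking the average gradient, mixed with a column-stochastic matrix (push). The Python sketch below follows that standard template on a toy quadratic; the graph, matrices, data, and step size are illustrative assumptions, not the paper's exact setup.

import numpy as np

rng = np.random.default_rng(1)
n, p = 4, 3
Q = rng.standard_normal((n, p, p))
H = np.stack([q.T @ q + np.eye(p) for q in Q])   # local quadratic Hessians
c = rng.standard_normal((n, p))

def grad(X):
    # Gradients of f_i(x) = 0.5 x^T H_i x - c_i^T x, stacked over agents.
    return np.einsum('ijk,ik->ij', H, X) - c

# Directed ring: R row-stochastic (pull consensus on x),
# C column-stochastic (push tracking of gradients).
R = np.zeros((n, n)); C = np.zeros((n, n))
for i in range(n):
    R[i, i] = R[i, (i - 1) % n] = 0.5
    C[i, i] = C[(i + 1) % n, i] = 0.5

alpha = 0.02                      # illustrative step size
x = np.zeros((n, p))
y = grad(x)                       # tracker initialized at the local gradients
for _ in range(1000):
    x_new = R @ (x - alpha * y)               # pull: mix neighbors' x-steps
    y = C @ y + grad(x_new) - grad(x)         # push: track the average gradient
    x = x_new

x_star = np.linalg.solve(H.sum(0), c.sum(0))  # minimizer of sum_i f_i
print(np.max(np.abs(x - x_star)))             # rows of x should approach x_star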


Optimal Algorithms for Smooth and Strongly Convex Distributed Optimization in Networks

In this paper, we determine the optimal convergence rates for strongly convex and smooth distributed optimization in two settings: centralized and decentralized communications over a network. For centralized (i.e. master/slave) algorithms, we show that distributing Nesterov’s accelerated gradient descent is optimal and achieves a precision ε > 0 in time O(κg(1 + ∆τ) ln(1/ε)), where κg is the co...


Communication-Efficient Algorithms for Decentralized and Stochastic Optimization

We present a new class of decentralized first-order methods for nonsmooth and stochastic optimization problems defined over multiagent networks. Considering that communication is a major bottleneck in decentralized optimization, our main goal in this paper is to develop algorithmic frameworks which can significantly reduce the number of inter-node communications. We first propose a decentralize...


Decentralized Convex Optimization for Wireless Sensor Networks

Many real-world applications arising in domains such as large-scale machine learning and wired and wireless networks can be formulated as distributed linear least-squares problems over a large network. These problems often have their data naturally distributed. For instance, in applications such as seismic imaging and the smart grid, the sensors are geographically distributed, and the current algorithms to analyze the...
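In consensus form (our notation; agent i privately holds M_i and b_i), the distributed linear least-squares problem this preview describes can be written as:

\min_{x \in \mathbb{R}^p} \sum_{i=1}^{n} \tfrac{1}{2}\,\|M_i x - b_i\|_2^2
\quad\Longleftrightarrow\quad
\min_{x_1,\dots,x_n} \sum_{i=1}^{n} \tfrac{1}{2}\,\|M_i x_i - b_i\|_2^2
\quad \text{s.t.}\quad x_1 = \cdots = x_n ,

which is exactly the consensus optimization template that methods such as ExtraPush target.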



Journal:
  • CoRR

Volume abs/1511.02942

Pages -

Publication date 2015